False positive rate

The proportion of actual negative examples for which the model mistakenly predicted the positive class. The following formula calculates the false positive rate:¹

$$\frac{\text{incorrectly classified actual negatives}}{\text{all actual negatives}}$$

which means:

$$\text{FPR} = \frac{FP}{FP + TN}$$

The false positive rate is the x-axis in an ROC curve.
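
As a concrete illustration, here is a minimal sketch of the computation in plain Python (the function name and the toy labels are assumptions for illustration, not from the glossary):

```python
def false_positive_rate(y_true, y_pred):
    """FPR = FP / (FP + TN), computed over the actual negatives (label 0)."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return fp / (fp + tn)  # assumes at least one actual negative

# Four actual negatives, one of them misclassified as positive:
y_true = [0, 0, 0, 0, 1, 1]
y_pred = [1, 0, 0, 0, 1, 0]
print(false_positive_rate(y_true, y_pred))  # 0.25
```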

When (not) to use

Use when false positives are more expensive than false negatives.²

In an imbalanced dataset where the number of actual negatives is very low, say only 1–2 examples in total, FPR is a less meaningful and less useful metric, as the toy calculation below shows.²
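
To see why, a toy calculation reusing the hypothetical sketch above: with only two actual negatives in the whole dataset, a single false positive swings FPR from 0.0 all the way to 0.5.

```python
# Two actual negatives in total: one false positive moves
# FPR from 0/2 = 0.0 to 1/2 = 0.5 in a single step.
y_true = [0, 0, 1, 1, 1, 1]
print(false_positive_rate(y_true, [0, 0, 1, 1, 0, 1]))  # 0.0
print(false_positive_rate(y_true, [1, 0, 1, 1, 0, 1]))  # 0.5
```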

See also

  • Recall (a.k.a. probability of detection; correspondingly, FPR is known as the probability of false alarm)
  • Accuracy
  • Precision

Footnotes

  1. developers.google.com/machine-learning/glossary#FP_rate

  2. ML crash course - Classification 2
